32 research outputs found

    A Hint-Based Random Access Protocol for mMTC in 5G Mobile Network

    With the increasing popularity of machine-type communication (MTC) devices, the legacy Long Term Evolution (LTE) system faces several new challenges. One critical issue is that a massive number of MTC devices attempting random access procedures may cause significant collisions and long delays. In this work, we present a new random access mechanism that splits the contention-based preambles in LTE into two logically disjoint parts, one for user equipments (UEs) being paged and the other for UEs not being paged. Since the IDs of paged UEs are known to the base station, a novel hash-based random access, which we call hint, becomes possible. The main idea is to pre-allocate preambles to paged UEs in a contention-free manner and confine non-paged UEs to contending in a separate region. We further build a mathematical model to find the optimal ratio of pre-allocated preambles. Extensive simulations are conducted to validate our results.
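
    The abstract describes hashing paged UE IDs into a dedicated preamble region while non-paged UEs contend elsewhere. Below is a minimal Python sketch of that split; the 40/24 ratio, the SHA-256 hash, and the function names are illustrative assumptions, not the paper's exact design.

        import hashlib
        import random

        NUM_PREAMBLES = 64   # contention-based preambles in LTE
        PAGED_REGION = 40    # assumed number of preambles reserved for paged UEs

        def preamble_for_paged_ue(ue_id):
            # Both the eNB and a paged UE can compute this mapping, since the
            # eNB knows the IDs of the UEs it pages.
            digest = hashlib.sha256(str(ue_id).encode()).digest()
            return int.from_bytes(digest[:4], "big") % PAGED_REGION

        def preamble_for_nonpaged_ue():
            # Non-paged UEs contend only within the remaining preambles.
            return random.randrange(PAGED_REGION, NUM_PREAMBLES)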

    r-Hint: A message-efficient random access response for mMTC in 5G networks

    Massive Machine Type Communication (mMTC) has attracted increasing attention due to the explosive growth of IoT devices. Random Access (RA) for a large number of mMTC devices is especially difficult since the high signaling overhead between User Equipments (UEs) and an eNB may overwhelm the available spectrum resources. To address this issue, we propose “respond by hint” (r-Hint), an ID-free handshaking protocol for contention-based RA in mMTC. The core idea of r-Hint is to avoid sequentially notifying contending UEs of their IDs by broadcasting a hint in the RA Response (RAR). To do so, we exploit the concept of prime factorization and hashing to encode the hint such that UEs can extract their required information accordingly. Our simulation results show that r-Hint reduces the RAR message size by 20%–40%. Such a reduction translates to roughly a 50% improvement in spectrum efficiency in LTE-M.
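
    The abstract mentions encoding the hint with prime factorization and hashing. The Python sketch below shows one plausible prime-factorization encoding, where the broadcast integer carries one small value per detected preamble; the preamble-to-prime mapping and the meaning of the value (e.g., an uplink-grant slot, starting at 1) are assumptions for illustration, not r-Hint's exact format.

        # One prime per preamble index; a full system would need one per preamble.
        PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53]

        def encode_hint(values_by_preamble):
            # values_by_preamble: {preamble_index: small_value >= 1}
            hint = 1
            for preamble, value in values_by_preamble.items():
                hint *= PRIMES[preamble] ** value
            return hint

        def decode_my_value(hint, my_preamble):
            # A UE knows only which preamble it sent; it reads off the exponent
            # of "its" prime without the eNB ever naming the UE's ID.
            p, value = PRIMES[my_preamble], 0
            while hint % p == 0:
                hint //= p
                value += 1
            return value

        # Example: preambles 2, 5 and 9 were detected and get values 3, 1 and 2.
        h = encode_hint({2: 3, 5: 1, 9: 2})
        assert decode_my_value(h, 5) == 1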

    On Scalable Service Function Chaining with O(1) Flowtable Entries

    The emergence of Network Function Virtualization (NFV) enables flexible and agile service function chaining in a Software Defined Network (SDN). While this virtualization technology offers efficient customization, it comes at the cost of consuming precious TCAM resources. As a result, the number of service chains that an SDN can support is limited by the flowtable size of a switch. To break this limitation, this paper presents CRT-Chain, a service chain forwarding protocol that requires only a constant number of flowtable entries, regardless of the number of service chain requests. The core of CRT-Chain is an encoding mechanism that leverages the Chinese Remainder Theorem (CRT) to compress the forwarding information into small labels. A switch does not need to install forwarding rules for every service chain request; it only needs to perform very simple modular arithmetic to extract the forwarding rules directly from CRT-Chain's labels attached to the header. We further incorporate prime reuse and path segmentation in CRT-Chain to reduce the header size and, hence, save bandwidth. Our evaluation results show that, when a chain consists of no more than 5 functions, CRT-Chain actually generates a header smaller than the legacy 32-bit header defined by the IETF. By enabling prime reuse and segmentation, CRT-Chain further reduces the total signaling overhead below that of the conventional scheme, showing that CRT-Chain not only enables scalable flowtable-free chaining but also improves network efficiency.
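
    The label construction and the per-switch lookup follow directly from the Chinese Remainder Theorem. The Python sketch below shows a basic CRT encoding and the single modulo operation a switch would perform; the prime assignment, the meaning of the forwarding value (an output port here), and the function names are illustrative assumptions, and prime reuse and path segmentation are not modeled.

        from math import prod

        def crt_encode(rules):
            # rules: {switch_prime: forwarding_value}, each value < its prime.
            M = prod(rules)
            label = 0
            for p, r in rules.items():
                Mp = M // p
                label += r * Mp * pow(Mp, -1, p)   # standard CRT reconstruction
            return label % M

        def switch_lookup(label, my_prime):
            # A switch needs one modulo operation, not a per-chain flowtable entry.
            return label % my_prime

        # Example: three switches with primes 251, 257 and 263 forward the
        # chain's traffic on ports 3, 7 and 1, respectively.
        label = crt_encode({251: 3, 257: 7, 263: 1})
        assert switch_lookup(label, 257) == 7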

    On Optimizing Signaling Efficiency of Retransmissions for Voice LTE

    The emergence of voice over LTE enables voice traffic transmission over 4G packet-switched networks. Since voice traffic is characterized by small payloads and frequent transmissions, the corresponding control channel overhead is high. Semi-persistent scheduling (SPS) was hence proposed in LTE-A to reduce this overhead. However, since wireless channels typically fluctuate, the numerous retransmissions caused by poor channel conditions, which are still scheduled dynamically, lead to a large overhead. To reduce the control message overhead caused by SPS retransmissions, we propose a new SPS retransmission protocol. Unlike traditional SPS, which removes the downlink control indicators (DCIs) directly, we compress key fields of all retransmissions' DCIs in the same subframe into a fixed-length hint. Thus, the base station does not need to send this information to different users individually but simply announces the hint as a broadcast message. In this way, we reduce the signaling overhead while preserving the flexibility of dynamic scheduling. Our simulation results show that, by enabling DCI compression, our design improves signaling efficiency by 2.16× and increases spectral utilization by up to 60%.
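
    The abstract does not detail how the DCI fields are compressed, so the Python sketch below assumes a simple fixed-width bit-packing of two key fields (resource-block start and MCS) per retransmitting UE into a single broadcast word; the field widths, ordering, and function names are all illustrative, not the paper's actual hint format.

        RB_BITS, MCS_BITS = 7, 5          # assumed field widths
        SLOT_BITS = RB_BITS + MCS_BITS

        def pack_hint(entries):
            # entries: list of (rb_start, mcs) tuples, in an ordering that the
            # retransmitting UEs are assumed to already agree on.
            hint = 0
            for rb, mcs in entries:
                hint = (hint << SLOT_BITS) | (rb << MCS_BITS) | mcs
            return hint

        def unpack_my_fields(hint, my_slot, total_slots):
            # A UE extracts only its own slot from the broadcast hint.
            shift = SLOT_BITS * (total_slots - 1 - my_slot)
            word = (hint >> shift) & ((1 << SLOT_BITS) - 1)
            return word >> MCS_BITS, word & ((1 << MCS_BITS) - 1)

        # Example: two retransmissions scheduled in the same subframe.
        hint = pack_hint([(12, 9), (40, 17)])
        assert unpack_my_fields(hint, my_slot=1, total_slots=2) == (40, 17)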

    Hey! I Have Something for You: Paging Cycle Based Random Access for LTE-A

    The surge of M2M devices imposes new challenges on the current cellular network architecture, especially in radio access networks. One of the key issues is that M2M traffic, characterized by small data and massive connection requests, causes significant collisions and congestion during network access via the random access (RA) procedure. To resolve this problem, in this paper we propose a paging-cycle-based protocol, rpHint, to facilitate the random access procedure in LTE-A. The high-level idea of our design is to leverage a UE's paging cycle as a hint to preassign RA preambles so that UEs can avoid preamble collisions in the first place. rpHint has two modes: (1) collision-free paging, which completely prevents cross-collisions between paged user equipments (UEs) and random-access UEs, and (2) collision-avoidance paging, which alleviates cross-collisions. Moreover, we formulate a mathematical model to derive the optimal paging ratio that maximizes the expected number of successful UEs. This analysis also allows us to dynamically switch to the better of the two modes. We show via extensive simulations that our design increases the number of successful UEs in an RA procedure by more than 3× compared to the legacy RA scheme of LTE.
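
    The optimal paging ratio comes from a mathematical model of expected RA successes. The Python sketch below uses a generic "succeed only if no other UE picks the same preamble" approximation, not necessarily the paper's exact model, to search for the split between reserved and contention preambles; the function names and example numbers are illustrative.

        def expected_successes(reserved, total, n_paged, n_random):
            # Paged UEs get pre-assigned preambles; random-access UEs contend
            # over the rest and succeed only if no one else picks their preamble.
            contention = total - reserved
            paged_ok = min(n_paged, reserved)
            if contention == 0 or n_random == 0:
                return paged_ok
            p_alone = ((contention - 1) / contention) ** (n_random - 1)
            return paged_ok + n_random * p_alone

        def best_reserved_count(total, n_paged, n_random):
            return max(range(total + 1),
                       key=lambda r: expected_successes(r, total, n_paged, n_random))

        # Example: 64 preambles, 20 paged UEs, 80 contending UEs.
        print(best_reserved_count(64, 20, 80))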

    Preference-aware content dissemination in opportunistic mobile social networks

    As mobile devices have become more ubiquitous, mobile users increasingly expect to utilize proximity-based connectivity, e.g., WiFi and Bluetooth, to opportunistically share multimedia content based on their personal preferences. However, many previous studies investigate content dissemination protocols that distribute a single object to as many users in an opportunistic mobile social network as possible, without considering user preference. In this paper, we propose PrefCast, a preference-aware content dissemination protocol that aims to maximally satisfy user preference for content objects. Because connectivity between users in a mobile social network is non-persistent, when a user meets neighboring users for a limited contact duration, it needs to efficiently disseminate a suitable set of objects that can bring possible future contacts a high utility (the quantitative metric of preference satisfaction). We formulate this problem as a maximum-utility forwarding model and propose an algorithm that enables each user to predict how much utility it can contribute to future contacts and to solve for its optimal forwarding schedule in a distributed manner. Our trace-based evaluation shows that PrefCast produces 18.5% and 25.2% higher average utility than protocols that only consider contact frequency or the preferences of local contacts, respectively.
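
    PrefCast's actual algorithm solves a maximum-utility forwarding schedule; as a simple stand-in, the Python sketch below greedily fills a limited contact duration with the objects that have the highest predicted utility per unit of transmission time. The utility prediction itself is abstracted into a number supplied by the caller, and all names and values are illustrative.

        def plan_forwarding(objects, contact_duration):
            # objects: list of (name, predicted_utility, tx_time); forward the
            # highest utility-per-second objects that fit in the contact.
            ranked = sorted(objects, key=lambda o: o[1] / o[2], reverse=True)
            schedule, used = [], 0.0
            for name, utility, tx_time in ranked:
                if used + tx_time <= contact_duration:
                    schedule.append(name)
                    used += tx_time
            return schedule

        # Example: pick objects for a 5-second contact.
        print(plan_forwarding([("clip_a", 9.0, 4.0),
                               ("clip_b", 5.0, 1.0),
                               ("clip_c", 3.0, 2.0)], contact_duration=5.0))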

    Rate adaptation for 802.11 multiuser mimo networks

    In multiuser MIMO (MU-MIMO) networks, the optimal bit rate of a user is highly dynamic and changes from one packet to the next. This breaks traditional bit rate adaptation algorithms, which rely on recent history to predict the best bit rate for the next packet. To address this problem, we introduce TurboRate, a rate adaptation scheme for MU-MIMO LANs. TurboRate shows that clients in a MU-MIMO LAN can adapt their bit rate on a per-packet basis if each client learns two variables: its SNR when it transmits alone to the access point, and the direction along which its signal is received at the AP. TurboRate also shows that each client can compute these two variables passively, without exchanging control frames with the access point. A TurboRate client then annotates its packets with these variables to enable other clients to pick the optimal bit rate and transmit concurrently to the AP. A prototype implementation on USRP-N200 shows that traditional rate adaptation does not deliver the gains of MU-MIMO WLANs and can interact negatively with MU-MIMO, leading to low throughput. In contrast, enabling MU-MIMO with TurboRate provides a mean throughput gain of 1.7× and 2.3× for 2-antenna and 3-antenna APs, respectively. Funding: National Science Council (China) (contract No. NSC 100-2221-E-001-005-MY2); National Science Foundation (U.S.) (NSF Grant CNS-1117194).
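
    Consistent with the abstract, a client can combine its stand-alone SNR with the channel directions of clients already transmitting to estimate what it will achieve in a concurrent transmission. The Python sketch below projects the client's direction away from the others' subspace, scales the SNR accordingly, and maps it to a rate; the rate-table thresholds, vector values, and function names are illustrative assumptions, not TurboRate's exact procedure.

        import numpy as np

        # (minimum effective SNR in dB, bit rate in Mb/s); thresholds are assumed.
        RATE_TABLE = [(25.0, 54), (18.0, 36), (10.0, 24), (5.0, 12), (float("-inf"), 6)]

        def effective_snr_db(solo_snr_db, my_dir, other_dirs):
            # my_dir / other_dirs: unit channel-direction vectors at the AP.
            if other_dirs:
                A = np.stack(other_dirs, axis=1)
                proj = A @ np.linalg.pinv(A)          # projector onto others' subspace
                residual = my_dir - proj @ my_dir     # part of my direction left over
                loss = float(np.linalg.norm(residual) ** 2)
            else:
                loss = 1.0
            return solo_snr_db + 10 * np.log10(max(loss, 1e-12))

        def pick_rate(snr_db):
            return next(rate for threshold, rate in RATE_TABLE if snr_db >= threshold)

        # Example: a 2-antenna AP with one client already transmitting along h1.
        h1 = np.array([1.0, 0.0])
        h2 = np.array([0.6, 0.8])
        print(pick_rate(effective_snr_db(25.0, h2, [h1])))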